198 research outputs found

    Computing with Granular Words

    Get PDF
    Computational linguistics is a sub-field of artificial intelligence; it is an interdisciplinary field dealing with statistical and/or rule-based modeling of natural language from a computational perspective. Traditionally, fuzzy logic is used to deal with fuzziness among single linguistic terms in documents. However, linguistic terms may also carry other types of uncertainty. For instance, when different users search for ‘cheap hotel’ in a search engine, they may need distinct pieces of related hidden information such as shopping, transportation, or weather. Therefore, this research work focuses on studying granular words and developing new algorithms that process them to handle uncertainty globally. To describe granular words precisely, a new structure called the Granular Information Hyper Tree (GIHT) is constructed. Furthermore, several techniques are developed for computing with granular words in spam filtering and query recommendation. Based on simulation results, the GIHT-Bayesian algorithm achieves a more accurate spam-filtering rate than the conventional Naive Bayes and SVM methods, and computing with granular words also generates better recommendation results, based on users’ assessments, when applied to a search engine.
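    For reference, a minimal from-scratch sketch of the conventional Naive Bayes spam-filtering baseline that the abstract compares GIHT-Bayesian against (not the GIHT-Bayesian algorithm itself); the toy corpus and labels below are illustrative assumptions.

```python
import math
from collections import Counter

# Toy labeled corpus: 1 = spam, 0 = ham (illustrative only).
train = [
    ("win free money now", 1),
    ("cheap pills limited offer", 1),
    ("meeting agenda for monday", 0),
    ("project report attached", 0),
]

# Per-class word counts and class frequencies.
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in train:
    word_counts[label].update(text.split())
    class_counts[label] += 1

vocab = set(w for c in word_counts.values() for w in c)

def predict(text):
    """Return the class with the highest log posterior (Laplace smoothing)."""
    scores = {}
    for label in (0, 1):
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("free money offer"))  # -> 1 (spam) on this toy corpus
```

    The abstract's point is that such a baseline scores isolated terms, whereas the GIHT structure additionally groups related hidden information around each granular word.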

    Gastroprotective effect of the root extract of Alpinia officinarum Hance (Zingiberoside) against acute indomethacin-induced gastric injuries in rats: Involvement of H+/K+-ATPase and prostaglandin E receptors

    Get PDF
    Purpose: To investigate the protective effects of Alpinia officinarum root ethanol extract (AOE) and galangin against acute indomethacin-induced injury of the rat gastric mucosa. Methods: Sprague-Dawley rats were treated daily with bismuth potassium citrate (0.08 g/kg), AOE at doses of 0.09, 0.18 and 0.36 g/kg, and galangin (0.2 g/kg) for 15 days. Gastric injury was then induced by intragastric administration of indomethacin (30 mg/kg). Blood flow and thickness of the gastric mucosa were determined using the neutral red clearance test and Alcian blue staining. The activity of H+/K+-ATPase was assayed using a biochemical kit. Prostaglandin E receptor expression was assayed by western blotting. Results: High doses of the ethanol extract of Alpinia officinarum root significantly inhibited H+/K+-ATPase activity by 8.12 % (p < 0.01), increased gastric mucosal blood flow (p < 0.001), enhanced mucus thickness (p < 0.05), and elevated the activities of prostaglandin E receptors 1 and 4 (p < 0.05). Galangin significantly inhibited H+/K+-ATPase activity by 4.82 % (p < 0.05) and increased gastric mucosal blood flow (p < 0.01). Conclusion: The ethanol extract of Alpinia officinarum root attenuates indomethacin-induced gastric injury by reinforcing the gastric mucosal barrier and inhibiting excessive gastric acid secretion. Thus, the extract can potentially be developed for the management of gastric injuries. Keywords: Galangin, Gastric mucosal barrier, Gastric acid, Prostaglandin, Indomethacin

    COPEN: Probing Conceptual Knowledge in Pre-trained Language Models

    Full text link
    Conceptual knowledge is fundamental to human cognition and knowledge bases. However, existing knowledge-probing works focus only on evaluating the factual knowledge of pre-trained language models (PLMs) and ignore conceptual knowledge. Since conceptual knowledge often appears as implicit commonsense behind texts, designing probes for it is hard. Inspired by knowledge representation schemata, we comprehensively evaluate the conceptual knowledge of PLMs by designing three tasks that probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in context, respectively. For these tasks, we collect and annotate 24k data instances covering 393 concepts, forming COPEN, a COnceptual knowledge Probing bENchmark. Extensive experiments on PLMs of different sizes and types show that existing PLMs systematically lack conceptual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing human-like cognition in PLMs. COPEN and our code are publicly released at https://github.com/THU-KEG/COPEN. Comment: Accepted by EMNLP 202
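    The first COPEN task asks whether a model's entity representations group entities by conceptual similarity. A hedged sketch of that idea, with toy vectors standing in for real PLM embeddings (the entities and numbers below are assumptions for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings: 'sparrow' and 'eagle' share the concept 'bird';
# 'hammer' belongs to 'tool'. Real probes would use PLM representations.
emb = {
    "sparrow": [0.9, 0.1, 0.0],
    "eagle":   [0.8, 0.2, 0.1],
    "hammer":  [0.1, 0.9, 0.3],
}

# The probe passes when same-concept entities are closer than cross-concept ones.
same = cosine(emb["sparrow"], emb["eagle"])
cross = cosine(emb["sparrow"], emb["hammer"])
print(same > cross)  # True for these toy vectors
```

    The paper's finding is that real PLMs frequently fail such checks, grouping entities by surface cues rather than concepts.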

    VisKoP: Visual Knowledge oriented Programming for Interactive Knowledge Base Question Answering

    Full text link
    We present the Visual Knowledge oriented Programming platform (VisKoP), a knowledge base question answering (KBQA) system that integrates humans into the loop to edit and debug knowledge base (KB) queries. VisKoP not only provides a neural program induction module, which converts natural language questions into the knowledge oriented programming language (KoPL), but also maps KoPL programs onto graphical elements. KoPL programs can be edited with simple graphical operations, such as dragging to add knowledge operators and slot filling to designate operator arguments. Moreover, VisKoP provides auto-completion for the knowledge base schema, and users can easily debug a KoPL program by checking its intermediate results. To facilitate practical KBQA on a million-entity-level KB, we design a highly efficient KoPL execution engine for the back end. Experimental results show that VisKoP is highly efficient and that user interaction can fix a large portion of wrong KoPL programs to obtain the correct answer. The VisKoP online demo at https://demoviskop.xlore.cn (stable release of this paper) and https://viskop.xlore.cn (beta release with new features), the highly efficient KoPL engine at https://pypi.org/project/kopl-engine, and a screencast video at https://youtu.be/zAbJtxFPTXo are now publicly available.
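    To illustrate what executing a KoPL-style operator sequence over a KB means, here is a simplified toy interpreter. The operator names follow the spirit of KoPL's knowledge operators, but this interpreter and its tiny KB are assumptions for illustration, not the real kopl-engine API.

```python
# Toy KB: entities with attributes and relations (illustrative only).
kb = {
    "Paris":  {"attrs": {"population": "2.1M"},
               "relations": {"capital_of": ["France"]}},
    "France": {"attrs": {"population": "67M"}, "relations": {}},
}

def run(program):
    """Execute operators left to right, threading the entity set through."""
    result = None
    for op, arg in program:
        if op == "Find":            # select entities by name
            result = [arg]
        elif op == "Relate":        # follow a relation from current entities
            result = [t for e in result
                      for t in kb[e]["relations"].get(arg, [])]
        elif op == "QueryAttr":     # read an attribute of the final entity
            result = kb[result[0]]["attrs"][arg]
    return result

# "What is the population of the country Paris is the capital of?"
program = [("Find", "Paris"), ("Relate", "capital_of"),
           ("QueryAttr", "population")]
print(run(program))  # -> 67M
```

    VisKoP's graphical editing amounts to manipulating such an operator list (dragging adds an operator, slot filling sets its argument), and debugging means inspecting `result` after each step.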